HVT Scoring Cells with Layers using scoreLayeredHVT

Zubin Dowlaty, Srinivasan Sudarsanam, Somya Shambhawi, Vishwavani

2024-11-08

1. Abstract

The HVT package is a collection of R functions to facilitate building topology preserving maps for rich multivariate data analysis, geared towards datasets with a big data preponderance of rows. The functions for this typical workflow are organized below:

  1. Data Compression: Vector quantization (VQ), HVQ (hierarchical vector quantization) using means or medians. This step compresses the rows (long data frame) using a compression objective.

  2. Data Projection: Dimension projection of the compressed cells to a 1D, 2D, or interactive surface plot with the Sammon non-linear algorithm. This step creates the topology preserving map (also called an embedding) coordinates in the desired output dimension.

  3. Tessellation: Create the cells required for object visualization using the Voronoi tessellation method; the package includes heatmap plots for hierarchical Voronoi tessellations (HVT). This step enables data insights, visualization, and interaction with the topology preserving map. Useful for semi-supervised tasks.

  4. Scoring: Scoring new data sets and recording their assignment using the map objects from the above steps, in a sequence of maps if required.

  5. Temporal Analysis and Visualization: A collection of functions that extends the HVT package by analyzing time series data for underlying patterns, calculating transition probabilities, and visualizing the flow of data over time.
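As a concrete illustration of step 2 above, the sketch below projects a small toy codebook of 3D centroids to 2D with Sammon's non-linear mapping via MASS::sammon (MASS is a recommended package shipped with standard R installations). The toy centroids are invented for illustration; this is not the HVT package's own projection code.

```r
# Illustrative sketch of data projection: map a 3D "codebook" of
# centroids to 2D with Sammon's non-linear mapping.
library(MASS)

set.seed(42)
centroids <- matrix(rnorm(30 * 3), ncol = 3)  # toy codebook: 30 centroids in 3D
proj <- MASS::sammon(dist(centroids), k = 2, trace = FALSE)

dim(proj$points)  # 2D topology-preserving coordinates, one row per centroid
```

In the HVT workflow, the inputs to this step would be the centroids produced by the compression step rather than random points.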

2. Notebook Requirements

This chunk verifies that all packages necessary to run this vignette are installed, installs any that are missing, and attaches all of them to the session environment.

list.of.packages <- c("plyr", "dplyr", "reactable", "kableExtra", "geozoo",
                      "plotly", "purrr", "data.table", "gridExtra", "tidyr","HVT")

new.packages <-list.of.packages[!(list.of.packages %in% installed.packages()[, "Package"])]
if (length(new.packages))
  install.packages(new.packages, dependencies = TRUE, repos='https://cloud.r-project.org/')
invisible(lapply(list.of.packages, library, character.only = TRUE))

3. Example: HVT with the Torus dataset

In this section, we will see how to use the package to visualize multidimensional data by projecting it to two dimensions using Sammon's projection, and how the resulting map is then used for scoring.

Data Understanding

First, let us see how to generate data for a torus, using the geozoo library. Geo Zoo (short for Geometric Zoo) is a compilation of geometric objects ranging from three to ten dimensions. It contains regular, well-known objects, e.g. the cube and sphere, and some abstract objects, e.g. Boy's surface, the torus, and the hyper-torus.

Here, we will generate a 3D torus (a torus is a surface of revolution generated by revolving a circle in three-dimensional space one full revolution about an axis that is coplanar with the circle) with 12000 points.
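For intuition, the parametric equations of such a torus can be sketched directly in base R. The major radius R = 2 and minor radius r = 1 below are assumptions chosen to match the coordinate ranges seen later in the geozoo output, not geozoo's internals.

```r
# Parametric torus:
#   x = (R + r*cos(v))*cos(u), y = (R + r*cos(v))*sin(u), z = r*sin(v)
set.seed(240)
n <- 12000
R <- 2   # assumed major radius (distance from axis to tube centre)
r <- 1   # assumed minor radius (tube radius)
u <- runif(n, 0, 2 * pi)
v <- runif(n, 0, 2 * pi)
torus_manual <- data.frame(
  x = (R + r * cos(v)) * cos(u),
  y = (R + r * cos(v)) * sin(u),
  z = r * sin(v)
)
# Every point lies between R - r = 1 and R + r = 3 from the z-axis
range(sqrt(torus_manual$x^2 + torus_manual$y^2))
```

Note that sampling u and v uniformly is only approximately uniform over the surface area; we use geozoo below for the actual dataset.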

Raw Torus Dataset

The torus dataset includes three columns: x, y, and z.

Let's explore the torus dataset containing 12000 points. For the sake of brevity, we are displaying the first six rows.

set.seed(240)
# Here p represents dimension of object, n represents number of points
torus <- geozoo::torus(p = 3,n = 12000)
torus_df <- data.frame(torus$points)
colnames(torus_df) <- c("x","y","z")
torus_df <- torus_df %>% round(4)
displayTable(head(torus_df))
x y z
-2.6282 0.5656 -0.7253
-1.4179 -0.8903 0.9455
-1.0308 1.1066 -0.8731
1.8847 0.1895 0.9944
-1.9506 -2.2507 0.2071
-1.4824 0.9229 0.9672

Now let’s have a look at the structure of the torus dataset.

str(torus_df)
## 'data.frame':    12000 obs. of  3 variables:
##  $ x: num  -2.63 -1.42 -1.03 1.88 -1.95 ...
##  $ y: num  0.566 -0.89 1.107 0.19 -2.251 ...
##  $ z: num  -0.725 0.946 -0.873 0.994 0.207 ...

Data distribution

This section displays four objects.

Variable Histograms: The histogram distribution of all the features in the dataset.

Box Plots: Box plots for all the features in the dataset. These plots will display the median and Interquartile range of each column at a panel level.

Correlation Matrix: This calculates the Pearson correlation which is a bivariate correlation value measuring the linear correlation between two numeric columns. The output plot is shown as a matrix.

Summary EDA: The table provides descriptive statistics for all the features in the dataset.

The inbuilt edaPlots function is used to display the above-mentioned four objects.

edaPlots(torus_df, output_type = "summary", n_cols = 3)
edaPlots(torus_df, output_type = "histogram", n_cols = 3)

edaPlots(torus_df, output_type = "boxplot", n_cols = 3)

edaPlots(torus_df, output_type = "correlation", n_cols = 3)

Train - Test Split

Let us split the torus dataset into train and test sets. We will randomly select 80% of the torus dataset as train and the remaining 20% as test.

smp_size <- floor(0.80 * nrow(torus_df))
set.seed(279)
train_ind <- sample(seq_len(nrow(torus_df)), size = smp_size)
torus_train <- torus_df[train_ind, ]
torus_test <- torus_df[-train_ind, ]

Training Dataset

Now, let’s have a look at the selected training dataset (containing 9600 data points). For the sake of brevity, we are displaying the first six rows.

rownames(torus_train) <- NULL
displayTable(head(torus_train))
x y z
1.7958 -0.4204 -0.9878
0.7115 -2.3528 -0.8889
1.9285 1.2034 0.9620
1.0175 0.0344 -0.1894
-0.2736 1.1298 -0.5464
1.8976 2.2391 0.3545

Now let’s have a look at the structure of the training dataset.

str(torus_train)
## 'data.frame':    9600 obs. of  3 variables:
##  $ x: num  1.796 0.712 1.929 1.018 -0.274 ...
##  $ y: num  -0.4204 -2.3528 1.2034 0.0344 1.1298 ...
##  $ z: num  -0.988 -0.889 0.962 -0.189 -0.546 ...

Data Distribution

edaPlots(torus_train, output_type = "summary", n_cols = 3)
edaPlots(torus_train,output_type = "histogram", n_cols = 3)

edaPlots(torus_train, output_type = "boxplot", n_cols = 3)

edaPlots(torus_train, output_type = "correlation", n_cols = 3)

Testing Dataset

Now, let’s have a look at the testing dataset (containing 2400 data points). For the sake of brevity, we are displaying the first six rows.

rownames(torus_test) <- NULL
displayTable(head(torus_test))
x y z
-2.6282 0.5656 -0.7253
2.7471 -0.9987 -0.3848
-2.4446 -1.6528 0.3097
-2.6487 -0.5745 0.7040
-0.2676 -1.0800 -0.4611
-1.1130 -0.6516 -0.7040

Now let’s have a look at the structure of the testing dataset.

str(torus_test)
## 'data.frame':    2400 obs. of  3 variables:
##  $ x: num  -2.628 2.747 -2.445 -2.649 -0.268 ...
##  $ y: num  0.566 -0.999 -1.653 -0.575 -1.08 ...
##  $ z: num  -0.725 -0.385 0.31 0.704 -0.461 ...

Data Distribution

edaPlots(torus_test, output_type = "summary", n_cols = 3)
edaPlots(torus_test,output_type = "histogram", n_cols = 3)

edaPlots(torus_test, output_type = "boxplot", n_cols = 3)

edaPlots(torus_test, output_type = "correlation", n_cols = 3)

4. Map A : Base Compressed Map

Let us try to visualize the compressed Map A from the diagram below.

Figure 1: Data Segregation with highlighted bounding box in red around compressed map A

This package can perform vector quantization using the following algorithms: k-means and k-medoids (selected via the quant_method parameter).

For more information on vector quantization, refer to the following link.

The trainHVT function constructs highly compressed hierarchical Voronoi tessellations. The raw data is first scaled, and the scaled data is supplied as input to the vector quantization algorithm. The algorithm compresses the dataset until a user-defined compression percentage is achieved, using a parameter called the quantization error, which acts as a threshold and determines the compression percentage: for a given user-defined compression percentage we obtain 'n' cells, and these cells will have a quantization error less than the threshold quantization error.
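The compression objective just described can be sketched in base R with stats::kmeans: compress the rows into cells, compute each cell's quantization error as the maximum (cf. error_metric = "max") L2 distance (cf. distance_metric = "L2_Norm") of its members from the centroid, and report the fraction of cells below the quant.err threshold. This is a simplified single-level analogue for intuition, not the package's implementation.

```r
# Simplified single-level analogue of the compression objective.
set.seed(240)
dat <- matrix(rnorm(2000 * 3), ncol = 3)

n_cells   <- 50
quant_err <- 0.6   # threshold, analogous to quant.err

km <- kmeans(dat, centers = n_cells, iter.max = 100, nstart = 5)

# Per-cell quantization error: max L2 distance of members from centroid
cell_qe <- sapply(seq_len(n_cells), function(k) {
  members <- dat[km$cluster == k, , drop = FALSE]
  max(sqrt(rowSums(sweep(members, 2, km$centers[k, ])^2)))
})

pct_below <- mean(cell_qe < quant_err)  # cf. percentOfCellsBelowQuantizationErrorThreshold
```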

Let’s try to comprehend the trainHVT first before moving ahead.

trainHVT(
  data,
  min_compression_perc,
  n_cells,
  depth,
  quant.err,
  normalize,
  distance_metric = c("L1_Norm", "L2_Norm"),
  error_metric = c("mean", "max"),
  quant_method = c("kmeans", "kmedoids"),
  dim_reduction_method = c("sammon", "tsne", "umap"),
  scale_summary = NA,
  diagnose = FALSE,
  hvt_validation = FALSE,
  train_validation_split_ratio = 0.8,
  projection.scale,
  tsne_perplexity,tsne_theta,tsne_verbose,
  tsne_eta,tsne_max_iter,
  umap_n_neighbors,umap_min_dist
)

Each of the parameters of the trainHVT function is explained below:

The output of the trainHVT function (a list of 7 elements) is explained below, with an image attached for clarity.

NOTE: The attached image is a snapshot of the output list generated from map A, which is referred to later in this section.

Figure 2: The Output list generated by trainHVT function.

We will use the trainHVT function to compress our data while preserving the essential features of the dataset. Our goal is to achieve data compression of at least 80%. In situations where the compression ratio does not meet the desired target, we can adjust the model parameters: for example, modifying the quantization error threshold or increasing the number of cells, and then rerunning the trainHVT function.
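The rerun-with-more-cells idea can be sketched as a simple loop. This is an assumed base-R analogue using stats::kmeans, not the package's tuning logic: double n_cells until at least 80% of cells fall below the quantization-error threshold (with a cap so the loop always terminates).

```r
# Sketch: grow n_cells until >= 80% of cells are below the threshold.
set.seed(240)
dat <- matrix(rnorm(3000 * 3), ncol = 3)
quant_err <- 0.6
target    <- 0.80

n_cells <- 25
repeat {
  km <- kmeans(dat, centers = n_cells, iter.max = 100, nstart = 5)
  cell_qe <- sapply(seq_len(n_cells), function(k) {
    m <- dat[km$cluster == k, , drop = FALSE]
    max(sqrt(rowSums(sweep(m, 2, km$centers[k, ])^2)))
  })
  if (mean(cell_qe < quant_err) >= target || n_cells >= 800) break
  n_cells <- n_cells * 2   # more cells -> smaller cells -> lower per-cell QE
}
n_cells   # first cell count meeting the target (or the cap)
```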

This is already covered in the HVT vignette; please refer to it for more information.

Model Parameters

set.seed(240)
torus_mapA <- trainHVT(
  torus_train,
  n_cells = 500,
  depth = 1,
  quant.err = 0.1,
  normalize = FALSE,
  distance_metric = "L2_Norm",
  error_metric = "max",
  quant_method = "kmeans",
  dim_reduction_method = "sammon"
)

Let’s check the compression summary for torus.

displayTable(data = torus_mapA[[3]]$compression_summary)
segmentLevel noOfCells noOfCellsBelowQuantizationError percentOfCellsBelowQuantizationErrorThreshold parameters
1 500 448 0.9 n_cells: 500 quant.err: 0.1 distance_metric: L2_Norm error_metric: max quant_method: kmeans

With the n_cells parameter set to 500, we successfully compressed the data, with 90% of the cells below the quantization error threshold. The next step involves performing data projection on the compressed data: the compressed data will be transformed and projected onto a lower-dimensional space to visualize and analyze it in a more manageable form.

As per the manual, torus_mapA[[3]] gives us detailed information about the hierarchical vector quantized data, and torus_mapA[[3]][['summary']] gives a tabular summary containing the number of points, the quantization error, and the codebook for each cell.

The datatable displayed below is the summary from torus_mapA showing Cell.ID, Centroids and Quantization Error for each of the 500 cells. For the sake of brevity, we are displaying only the first 100 rows.

displayTable(data =torus_mapA[[3]][['summary']])
Segment.Level Segment.Parent Segment.Child n Cell.ID Quant.Error x y z
1 1 1 25 133 0.0754 -0.9156 -0.7427 0.5679
1 1 2 19 145 0.0634 -0.2122 -1.1651 -0.5760
1 1 3 14 174 0.0406 -1.0559 -0.0056 0.3274
1 1 4 9 491 0.0539 2.1556 1.8618 0.5252
1 1 5 18 199 0.0778 -1.6661 1.5286 -0.9607
1 1 6 18 306 0.082 1.7298 -1.1539 0.9915
1 1 7 24 85 0.0818 -2.3941 0.6431 -0.8653
1 1 8 15 164 0.043 -0.9748 -0.2892 0.1846
1 1 9 16 458 0.066 1.9507 1.3286 0.9284
1 1 10 22 413 0.0954 -0.0517 2.6839 -0.7177
1 1 11 11 495 0.0627 1.9150 2.1799 0.4220
1 1 12 13 30 0.0524 -1.7647 -1.7214 0.8813
1 1 13 10 317 0.0481 -0.6928 1.8956 0.9970
1 1 14 23 27 0.0867 -2.4987 -0.8635 -0.7539
1 1 15 17 358 0.0658 1.7985 -0.4216 -0.9854
1 1 16 16 209 0.0648 -1.4374 1.2731 -0.9923
1 1 17 10 479 0.0628 1.3518 2.5057 0.5270
1 1 18 12 295 0.0469 -0.0451 1.0310 0.2486
1 1 19 33 203 0.0691 0.6004 -1.2914 0.8125
1 1 20 16 465 0.0711 2.5842 0.6383 0.7389
1 1 21 13 370 0.0481 0.2766 1.5094 0.8838
1 1 22 15 426 0.0739 2.3650 -0.2072 0.9211
1 1 23 19 139 0.0652 0.3765 -1.7677 0.9790
1 1 24 23 131 0.0587 -0.5714 -1.0131 -0.5438
1 1 25 27 242 0.0742 0.7600 -0.8657 0.5270
1 1 26 27 330 0.0512 0.6382 0.7782 -0.1193
1 1 27 16 178 0.0901 -2.2044 1.9400 -0.3228
1 1 28 22 31 0.0997 -0.8157 -2.5174 -0.7516
1 1 29 16 163 0.047 -0.9712 -0.2575 -0.0968
1 1 30 19 37 0.1148 -0.7837 -2.4049 0.8305
1 1 31 25 175 0.0628 -1.1819 0.3363 -0.6372
1 1 32 21 363 0.0943 -0.8586 2.6294 -0.6279
1 1 33 19 355 0.0588 -0.4344 1.9713 0.9956
1 1 34 22 297 0.0795 1.3725 -0.7779 0.9020
1 1 35 24 108 0.1119 1.3745 -2.6472 -0.0551
1 1 36 23 249 0.0683 0.7506 -0.6591 0.0052
1 1 37 19 219 0.0785 -1.8693 2.1400 -0.5252
1 1 38 31 104 0.1022 1.1347 -2.6257 0.4806
1 1 39 19 245 0.1097 2.0277 -1.9262 -0.5756
1 1 40 27 36 0.0807 -2.3573 -1.0168 0.8147
1 1 41 9 300 0.0362 0.1642 0.9869 -0.0520
1 1 42 22 357 0.0754 1.7831 -0.7828 0.9925
1 1 43 8 485 0.0539 1.7097 2.1517 0.6587
1 1 44 16 424 0.0831 2.8101 -1.0120 0.0205
1 1 45 16 56 0.0765 -2.0418 -0.8413 0.9706
1 1 46 17 142 0.0529 -0.8765 -0.5927 -0.3367
1 1 47 17 492 0.0681 2.6908 1.2990 -0.0997
1 1 48 24 155 0.0585 -0.4021 -0.9851 0.3496
1 1 49 19 172 0.0552 0.1264 -1.2669 0.6868
1 1 50 15 21 0.08 -1.6337 -2.0437 0.7807
1 1 51 18 128 0.0794 -1.7937 0.6365 -0.9906
1 1 52 20 445 0.0846 2.6412 0.0680 -0.7608
1 1 53 37 220 0.075 0.4823 -0.8888 0.1403
1 1 54 17 59 0.0683 -1.9414 -0.7644 -0.9927
1 1 55 20 158 0.0816 1.2510 -2.1476 -0.8650
1 1 56 14 442 0.0886 0.3344 2.7347 0.6475
1 1 57 15 493 0.0768 1.6659 2.4284 -0.3058
1 1 58 10 345 0.0543 0.5899 0.9465 0.4687
1 1 59 25 273 0.068 1.1804 -0.5783 -0.7228
1 1 60 19 40 0.0901 -2.9076 0.0507 -0.3954
1 1 61 25 34 0.1174 -0.2558 -2.9583 0.1913
1 1 62 14 283 0.0547 -0.4012 1.3810 -0.8262
1 1 63 16 391 0.0839 1.3832 0.6606 0.8816
1 1 64 13 341 0.0514 0.2457 1.2283 0.6621
1 1 65 22 275 0.0876 -1.4875 2.3207 0.6386
1 1 66 16 462 0.0715 2.7701 0.2555 0.6112
1 1 67 23 126 0.0618 -1.0906 -0.5149 -0.6067
1 1 68 11 226 0.0742 -1.9014 2.0442 0.6011
1 1 69 32 375 0.1039 2.4512 -1.6789 0.1824
1 1 70 22 188 0.1065 1.7509 -2.3390 -0.3562
1 1 71 17 3 0.091 -2.3169 -1.8559 0.1943
1 1 72 20 266 0.0884 -1.3630 1.9413 0.9219
1 1 73 15 441 0.0832 0.3996 2.7494 -0.6144
1 1 74 16 89 0.0577 -0.7851 -1.3627 -0.9007
1 1 75 16 394 0.0605 1.6827 0.1413 0.9486
1 1 76 28 168 0.0663 -0.1737 -1.0064 0.2044
1 1 77 23 461 0.0887 2.9348 -0.1535 0.3179
1 1 78 21 153 0.0616 -1.0358 -0.2533 -0.3598
1 1 79 22 314 0.0535 1.2447 -0.3687 0.7101
1 1 80 12 166 0.0356 0.3590 -1.3396 -0.7894
1 1 81 23 136 0.0751 -0.4853 -1.1361 0.6432
1 1 82 26 88 0.113 -2.6890 0.8806 0.5280
1 1 83 11 453 0.069 1.2572 2.0398 0.9124
1 1 84 20 389 0.0842 -0.4279 2.5188 0.8245
1 1 85 12 490 0.0676 1.6271 2.4302 0.3672
1 1 86 24 84 0.1132 0.6161 -2.5678 0.7539
1 1 87 28 321 0.0693 1.4185 -0.2337 -0.8225
1 1 88 30 187 0.0968 0.7378 -1.6220 0.9705
1 1 89 25 152 0.0508 -0.8743 -0.4888 -0.0343
1 1 90 18 331 0.0477 0.5427 0.8526 0.1363
1 1 91 25 149 0.0757 -1.4934 0.4715 -0.8959
1 1 92 19 214 0.049 -0.8738 0.4866 0.0211
1 1 93 19 235 0.062 -0.9013 0.8858 0.6758
1 1 94 28 208 0.0853 0.4245 -1.0653 0.5218
1 1 95 16 487 0.1144 1.2771 2.6831 -0.1950
1 1 96 17 15 0.0934 -2.4985 -1.3406 0.5299
1 1 97 19 234 0.0542 -0.7038 0.7279 0.1612
1 1 98 17 228 0.0787 -1.5592 1.5319 0.9795
1 1 99 16 359 0.0841 -0.8193 2.4046 0.8325
1 1 100 17 417 0.0785 -0.1214 2.6907 0.7079

Now let us understand what each column in the above table means:

All the columns after Quant.Error contain the centroid coordinates for each cell. Collectively they can also be called a codebook, which represents the collection of all centroids or codewords.
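In other words, the codebook is just the matrix of centroids, and scoring a new point amounts to finding its nearest codeword. A minimal base-R sketch (illustrative; not the package's scoring routine, and the data here is invented):

```r
# Nearest-codeword assignment: the essence of scoring against a codebook.
set.seed(1)
codebook <- matrix(rnorm(10 * 3), ncol = 3)   # 10 codewords in 3D
new_pts  <- matrix(rnorm(5 * 3),  ncol = 3)   # points to score

nearest_codeword <- function(p, cb) {
  which.min(sqrt(rowSums(sweep(cb, 2, p)^2))) # L2 distance to every codeword
}
assignments <- apply(new_pts, 1, nearest_codeword, cb = codebook)
assignments  # cell index assigned to each scored point
```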

Now let’s try to understand the plotHVT function. Its parameters are explained in detail below:

plotHVT(hvt.results, line.width, color.vec, pch1, centroid.size,
        centroid.color, title, maxDepth, child.level, hmap.cols,
        quant.error.hmap, n_cells.hmap, label.size,
        sepration_width, layer_opacity, cell_id,
        dim_size, plot.type = '2Dhvt')

Let’s plot the Voronoi tessellation for layer 1 (map A).

plotHVT(torus_mapA,
        line.width = c(0.4), 
        color.vec = c("navy blue"),
        centroid.size = 0.01,
        maxDepth = 1,
        plot.type = "2Dhvt") 

Figure 3: The Voronoi Tessellation for layer 1 (map A) shown for the 500 cells in the dataset ’torus’

4.1 Heatmaps

Now let’s plot the Voronoi Tessellation with the heatmap overlaid for all the features in the torus dataset for better visualization and interpretation of data patterns and distributions.

The heatmaps displayed below provide a visual representation of the spatial characteristics of the torus dataset, allowing us to observe patterns and trends in the distribution of each of the features (x, y, z). The sheer green shades highlight regions with higher values in each of the heatmaps, while the indigo shades indicate areas with the lowest values. By analyzing these heatmaps, we can gain insights into the variations and relationships between these features within the torus dataset.

  plotHVT(
  torus_mapA,
  child.level = 1,
  hmap.cols = "x",
  line.width = c(0.2),
  color.vec = c("navy blue"),
  centroid.size = 0.1,
  plot.type = '2Dheatmap') 

Figure 4: The Voronoi Tessellation with the heat map overlaid for variable ’x’ in the ’torus’ dataset

  plotHVT(
  torus_mapA,
  child.level = 1,
  hmap.cols = "y",
  line.width = c(0.2),
  color.vec = c("navy blue"),
  centroid.size = 0.1,
  plot.type = '2Dheatmap') 

Figure 5: The Voronoi Tessellation with the heat map overlaid for variable ’y’ in the ’torus’ dataset

  plotHVT(
  torus_mapA,
  child.level = 1,
  hmap.cols = "z",
  line.width = c(0.2),
  color.vec = c("navy blue"),
  centroid.size = 0.1,
  plot.type = '2Dheatmap') 

Figure 6: The Voronoi Tessellation with the heat map overlaid for variable ’z’ in the ’torus’ dataset

5. Map B : Compressed Novelty Map

Let us try to visualize the Map B from the diagram below.

Figure 7: Data Segregation with highlighted bounding box in red around map B

In this section, we will manually identify the novelty cells from the plotted torus_mapA and store them in the identified_Novelty_cells variable.

Note: To manually select the novelty cells from map A, one can enhance its interactivity by adding plotly elements to the code. This transforms map A into an interactive plot: hovering over a cell's centroid displays a tag containing its segment child information, so users can explore the map and selectively choose the novelty cells they wish to consider. An image is attached for reference.

Figure 8: Manually selecting novelty cells

The removeNovelty function removes the identified novelty cell(s) from the training dataset (containing 9600 datapoints) and stores those records separately.

It takes as input the cell numbers (Segment.Child) of the manually identified novelty cell(s) and the compressed HVT map (torus_mapA) with 500 cells. It returns a list of two items: the data with novelty and the data without novelty.

NOTE: As we are using the torus dataset here, the identified novelty cells are given for demo purposes.

identified_Novelty_cells <- c(273, 44, 61, 486, 185, 425)   # as an example
output_list <- removeNovelty(identified_Novelty_cells, torus_mapA)
data_with_novelty <- output_list[[1]]
data_without_novelty <- output_list[[2]]
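Conceptually, removeNovelty splits the training rows by cell membership. The base-R sketch below is an assumed analogue, not the package's implementation; the cell assignments here are hypothetical, generated at random purely for illustration.

```r
# Assumed analogue of removeNovelty: split rows by cell membership.
set.seed(7)
train <- data.frame(x = rnorm(100), y = rnorm(100), z = rnorm(100))
cell_of_row <- sample(1:20, 100, replace = TRUE)   # hypothetical assignments

novelty_cells <- c(3, 11)                          # cells flagged as novelty
is_novel <- cell_of_row %in% novelty_cells

data_with_novelty    <- train[is_novel, ]          # stored separately (-> map B)
data_without_novelty <- train[!is_novel, ]         # remainder (-> map C)
```

Every training row ends up in exactly one of the two subsets, which is why the record counts (115 and 9485) sum to the original 9600.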

Let’s have a look at the data with novelty (containing 115 records).

novelty_data <- data_with_novelty
novelty_data$Row.No <- row.names(novelty_data)
novelty_data <- novelty_data %>% dplyr::select("Row.No","Cell.ID","Cell.Number","x","y","z")
colnames(novelty_data) <- c("Row.No","Cell.ID","Segment.Child","x","y","z")
displayTable(novelty_data)
Row.No Cell.ID Segment.Child x y z
1 424 44 2.7839 -1.0776 -0.1712
2 424 44 2.8089 -1.0384 0.1027
3 424 44 2.8404 -0.9040 0.1952
4 424 44 2.7834 -1.0866 0.1544
5 424 44 2.8208 -0.9473 0.2193
6 424 44 2.7804 -1.0582 -0.2226
7 424 44 2.8795 -0.8408 0.0226
8 424 44 2.7738 -1.1262 -0.1121
9 424 44 2.7538 -1.1860 -0.0569
10 424 44 2.8513 -0.9218 -0.0828
11 424 44 2.8754 -0.8550 0.0168
12 424 44 2.8450 -0.8996 0.1792
13 424 44 2.8239 -0.9397 0.2172
14 424 44 2.7871 -1.0527 -0.2026
15 424 44 2.7875 -1.1082 -0.0220
16 424 44 2.7661 -1.1507 0.0905
17 34 61 -0.3149 -2.9384 0.2958
18 34 61 -0.3078 -2.9675 0.1812
19 34 61 -0.1469 -2.9921 0.0927
20 34 61 -0.3766 -2.9762 0.0092
21 34 61 -0.0344 -2.9993 0.0303
22 34 61 -0.2807 -2.9525 0.2592
23 34 61 -0.3967 -2.9725 0.0484
24 34 61 -0.2519 -2.9034 0.4049
25 34 61 -0.3169 -2.9822 0.0443
26 34 61 -0.1057 -2.9757 0.2107
27 34 61 0.0958 -2.9784 0.1994
28 34 61 -0.3598 -2.9046 0.3757
29 34 61 -0.5300 -2.9485 0.0921
30 34 61 -0.2574 -2.9769 0.1544
31 34 61 -0.4312 -2.9677 0.0486
32 34 61 0.0796 -2.9885 0.1440
33 34 61 -0.2803 -2.9049 0.3957
34 34 61 -0.4258 -2.9397 0.2417
35 34 61 -0.3847 -2.9574 0.1871
36 34 61 -0.1814 -2.9475 0.3027
37 34 61 -0.4657 -2.9341 0.2396
38 34 61 -0.2817 -2.9829 0.0871
39 34 61 -0.3100 -2.9449 0.2759
40 34 61 -0.0367 -2.9262 0.3764
41 34 61 -0.0928 -2.9950 0.0848
42 75 185 -2.8203 0.9904 -0.1467
43 75 185 -2.8178 1.0260 0.0499
44 75 185 -2.7501 1.1484 -0.1977
45 75 185 -2.8307 0.8870 -0.2570
46 75 185 -2.9216 0.6631 -0.0905
47 75 185 -2.7794 1.1095 -0.1211
48 75 185 -2.8862 0.7563 -0.1801
49 75 185 -2.7889 1.0811 -0.1333
50 75 185 -2.8045 1.0304 0.1555
51 75 185 -2.8893 0.7432 -0.1815
52 75 185 -2.8085 1.0402 -0.1003
53 75 185 -2.7684 1.1089 -0.1877
54 75 185 -2.8008 1.0713 -0.0508
55 75 185 -2.8734 0.8593 -0.0420
56 75 185 -2.8926 0.7896 0.0560
57 75 185 -2.8014 1.0351 0.1638
58 75 185 -2.8382 0.9661 -0.0614
59 75 185 -2.7733 1.1066 -0.1675
60 75 185 -2.8765 0.8519 -0.0099
61 75 185 -2.9258 0.6607 -0.0332
62 75 185 -2.8318 0.9591 0.1427
63 439 273 2.9450 -0.5316 0.1218
64 439 273 2.9041 -0.7280 0.1098
65 439 273 2.9111 -0.6332 0.2030
66 439 273 2.9095 -0.6207 0.2223
67 439 273 2.8605 -0.7913 0.2510
68 439 273 2.9184 -0.6856 -0.0661
69 439 273 2.8971 -0.7568 0.1061
70 439 273 2.8758 -0.6541 0.3144
71 439 273 2.9496 -0.4882 0.1430
72 439 273 2.9188 -0.6454 0.1457
73 439 273 2.9351 -0.5220 0.1932
74 439 273 2.8530 -0.8358 0.2313
75 439 273 2.8969 -0.5663 0.3069
76 439 273 2.8809 -0.8085 0.1250
77 439 273 2.8340 -0.8588 0.2755
78 460 425 0.5660 2.9195 0.2270
79 460 425 0.4825 2.9331 -0.2327
80 460 425 0.2922 2.9667 0.1938
81 460 425 0.7219 2.8642 0.3005
82 460 425 0.5100 2.9548 0.0551
83 460 425 0.5103 2.9319 0.2180
84 460 425 0.6264 2.9337 -0.0202
85 460 425 0.4241 2.9696 -0.0208
86 460 425 0.4568 2.9565 -0.1292
87 460 425 0.4127 2.9640 0.1212
88 460 425 0.2388 2.9833 0.1195
89 460 425 0.4408 2.9674 0.0030
90 460 425 0.5544 2.9221 0.2254
91 460 425 0.3024 2.9847 0.0031
92 460 425 0.3711 2.9462 0.2453
93 460 425 0.4730 2.9532 0.1347
94 19 486 -0.9027 -2.8262 0.2552
95 19 486 -0.7470 -2.9053 0.0186
96 19 486 -0.9246 -2.8381 0.1728
97 19 486 -0.9065 -2.8593 0.0313
98 19 486 -0.7323 -2.9085 -0.0371
99 19 486 -1.0349 -2.7844 0.2410
100 19 486 -1.1207 -2.7825 0.0230

5.1 Voronoi Tessellation with highlighted novelty cell

The plotNovelCells function plots the Voronoi tessellation using the compressed HVT map (torus_mapA) containing 500 cells and highlights the identified novelty cells, i.e. the 6 cells (containing 115 records), in red on the map.

plotNovelCells(identified_Novelty_cells, torus_mapA,line.width = c(0.4),centroid.size = 0.01)

Figure 9: The Voronoi Tessellation constructed using the compressed HVT map (map A) with the novelty cell(s) highlighted in red

We pass the dataframe with the novelty records (115 records) to the trainHVT function, along with the other model parameters mentioned below, to generate map B (layer 2).

Model Parameters

colnames(data_with_novelty) <- c("Cell.ID","Segment.Child","x","y","z")
data_with_novelty <- data_with_novelty[,-1:-2]
mapA_scale_summary = torus_mapA[[3]]$scale_summary
torus_mapB <- trainHVT(data_with_novelty,
                  n_cells = 11,   
                  depth = 1,
                  quant.err = 0.1,
                  normalize = FALSE,
                  distance_metric = "L2_Norm",
                  error_metric = "max",
                  quant_method = "kmeans",
                  dim_reduction_method = "sammon")

The datatable displayed below is the summary from map B (layer 2) showing Cell.ID, Centroids and Quantization Error for each of the 11 cells.

displayTable(data =torus_mapB[[3]][['summary']])
Segment.Level Segment.Parent Segment.Child n Cell.ID Quant.Error x y z
1 1 1 6 6 0.0497 -0.0341 -2.9882 0.1270
1 1 2 7 2 0.0672 -0.9392 -2.8448 -0.0046
1 1 3 9 10 0.097 0.4600 2.9390 0.1984
1 1 4 7 11 0.0621 0.4633 2.9571 -0.0488
1 1 5 15 8 0.082 2.8993 -0.6751 0.1789
1 1 6 21 9 0.101 -2.8324 0.9469 -0.0663
1 1 7 11 5 0.0514 -0.3795 -2.9641 0.1212
1 1 8 16 7 0.0831 2.8101 -1.0120 0.0205
1 1 9 8 4 0.073 -0.2520 -2.9278 0.3358
1 1 10 9 1 0.0481 -0.9761 -2.8015 0.2422
1 1 11 6 3 0.0622 -0.7308 -2.8890 0.1484

Now let’s check the compression summary for HVT (torus_mapB). The table below shows the number of cells, the number of cells with quantization error below the threshold, and the percentage of such cells for each level.

displayTable(data = torus_mapB[[3]]$compression_summary)
segmentLevel noOfCells noOfCellsBelowQuantizationError percentOfCellsBelowQuantizationErrorThreshold parameters
1 11 10 0.91 n_cells: 11 quant.err: 0.1 distance_metric: L2_Norm error_metric: max quant_method: kmeans

As can be seen from the table above, 91% of the cells are below the quantization error threshold. Since we have attained the desired compression percentage, we will not subdivide the cells further.

6. Map C : Compressed Map without Novelty

Let us try to visualize the compressed Map C from the diagram below.

Figure 10: Data Segregation with highlighted bounding box in red around compressed map C

6.1 Iteration 1

With the novelties removed, we construct another hierarchical Voronoi tessellation, map C (layer 2), on the data without novelty (containing 9485 records) using the model parameters mentioned below.

Model Parameters

torus_mapC <- trainHVT(data = data_without_novelty,
                  n_cells = 10,
                  depth = 2,
                  quant.err = 0.1,
                  normalize = FALSE,
                  distance_metric = "L2_Norm",
                  error_metric = "max",
                  quant_method = "kmeans",
                  dim_reduction_method = "sammon")

Now let’s check the compression summary for HVT (torus_mapC), where n_cells was set to 10. The table below shows the number of cells, the number of cells with quantization error below the threshold, and the percentage of such cells for each level.

displayTable(data = torus_mapC[[3]]$compression_summary)
segmentLevel noOfCells noOfCellsBelowQuantizationError percentOfCellsBelowQuantizationErrorThreshold parameters
1 10 0 0 n_cells: 10 quant.err: 0.1 distance_metric: L2_Norm error_metric: max quant_method: kmeans
2 100 0 0 n_cells: 10 quant.err: 0.1 distance_metric: L2_Norm error_metric: max quant_method: kmeans

As can be seen from the table above, 0% of the cells are below the quantization error threshold at level 1 and 0% at level 2.

6.2 Iteration 2

Since we are yet to achieve at least 80% compression at depth 2, let's try to compress again using the set of model parameters mentioned below and the data without novelty (containing 9485 records).

Model Parameters

torus_mapC <- trainHVT(data_without_novelty,
                  n_cells = 46,    
                  depth = 2,
                  quant.err = 0.1,
                  normalize = FALSE,
                  distance_metric = "L2_Norm",
                  error_metric = "max",
                  quant_method = "kmeans",
                  dim_reduction_method = "sammon")

The datatable displayed below is the summary from map C (layer 2), showing Cell.ID, centroids, and quantization error.

displayTable(data =torus_mapC[[3]][['summary']])
Segment.Level Segment.Parent Segment.Child n Cell.ID Quant.Error x y z
1 1 1 183 567 0.3054 -1.5739 2.3989 -0.0019
1 1 2 236 355 0.3085 -1.5176 -1.2760 -0.9229
1 1 3 162 88 0.2925 -1.7876 -2.2656 0.0079
1 1 4 167 1865 0.2886 2.5078 -1.4110 -0.1993
1 1 5 183 874 0.303 0.4550 -2.6700 -0.5138
1 1 6 251 1120 0.2282 -0.1585 1.0003 -0.0305
1 1 7 194 1576 0.251 1.3877 -0.0061 0.7561
1 1 8 196 1208 0.3194 -0.4306 2.5131 -0.7020
1 1 9 189 2042 0.2972 1.7211 2.2262 0.3548
1 1 10 273 609 0.2847 -1.1913 -0.1942 0.6043
1 1 11 248 1320 0.2812 0.2437 1.4647 0.8136
1 1 12 257 1537 0.2358 1.2573 0.0921 -0.6336
1 1 13 187 602 0.3037 -0.1334 -2.4336 0.7936
1 1 14 207 331 0.3187 -2.2996 1.3756 -0.5613
1 1 15 154 2118 0.2917 2.6593 1.1769 0.0397
1 1 16 288 1465 0.2804 0.5899 1.2696 -0.7664
1 1 17 148 2003 0.2886 2.7567 -0.1572 0.4931
1 1 18 269 886 0.2929 -0.9237 1.3992 -0.8903
1 1 19 170 153 0.3206 -2.5259 -0.5698 -0.7073
1 1 20 243 1251 0.233 0.8259 -0.6708 0.3379
1 1 21 189 1908 0.3097 1.8602 1.2506 -0.8799
1 1 22 265 880 0.2559 -0.1847 -1.0807 0.3879
1 1 23 265 467 0.3068 -1.6366 0.1148 -0.8901
1 1 24 258 807 0.2259 -0.9248 0.4324 -0.1556
1 1 25 184 1657 0.2823 1.9231 -1.1735 0.8763
1 1 26 151 352 0.317 -0.8253 -2.4341 -0.7089
1 1 27 264 695 0.2398 -0.8307 -0.6299 -0.2498
1 1 28 166 1387 0.2861 1.4533 -2.4119 0.3963
1 1 29 288 1404 0.231 0.7958 0.6328 0.1192
1 1 30 177 1852 0.2842 0.9376 2.4560 -0.6399
1 1 31 217 1101 0.2771 0.6991 -1.6243 0.9238
1 1 32 177 1362 0.2741 1.3774 -1.9334 -0.8316
1 1 33 182 287 0.2717 -2.0845 -0.4683 0.9331
1 1 34 238 815 0.2843 -0.1826 -1.4025 -0.7633
1 1 35 172 1983 0.2792 2.5545 0.1289 -0.7243
1 1 36 151 64 0.2853 -2.4620 -1.3984 0.3219
1 1 37 205 968 0.2864 -0.8344 2.0840 0.8889
1 1 38 238 1184 0.235 0.7357 -0.9624 -0.5816
1 1 39 221 436 0.2905 -1.1822 -1.5124 0.9261
1 1 40 172 1607 0.3226 0.2441 2.6498 0.6154
1 1 41 143 125 0.3125 -2.8632 0.0406 0.1978
1 1 42 138 1891 0.2356 2.1489 0.5767 0.9229
1 1 43 229 412 0.2999 -2.1320 1.1371 0.8097
1 1 44 170 1646 0.2449 1.8050 -0.8284 -0.9448
1 1 45 190 1753 0.2941 1.4176 1.2762 0.9341
1 1 46 230 814 0.2531 -1.0600 0.9346 0.7788
2 1 1 3 845 0.026 -1.1649 2.6315 0.4787
2 1 2 4 365 0.068 -2.0953 2.1084 0.2293
2 1 3 3 409 0.0895 -2.0095 2.1057 0.4101
2 1 4 5 542 0.0838 -1.5993 2.4100 -0.4497
2 1 5 3 585 0.0576 -1.5158 2.1369 -0.7836
2 1 6 3 497 0.09 -1.7610 2.3841 0.2574
2 1 7 2 604 0.0347 -1.4775 2.5879 -0.1968
2 1 8 5 434 0.1263 -1.8990 2.2555 -0.3090
2 1 9 2 766 0.0167 -1.2761 2.3121 -0.7675
2 1 10 4 957 0.097 -0.9648 2.7650 0.3670
2 1 11 7 665 0.1034 -1.4211 2.5979 0.2718
2 1 12 4 878 0.0728 -1.1082 2.7727 0.1636
2 1 13 6 505 0.1314 -1.6968 2.4514 -0.1797
2 1 14 2 836 0.0521 -1.1644 2.7452 -0.1835
2 1 15 5 498 0.1206 -1.7351 2.0639 -0.7148
2 1 16 2 343 0.0268 -2.2026 1.9548 0.3262
2 1 17 8 790 0.1368 -1.2759 2.6404 0.3534
2 1 18 7 593 0.1022 -1.5054 2.2877 -0.6707
2 1 19 3 784 0.0907 -1.2661 2.7124 0.1046
2 1 20 5 644 0.0536 -1.4650 2.4732 0.4837
2 1 21 4 648 0.128 -1.4266 2.5487 -0.3859
2 1 22 6 433 0.0853 -1.9083 2.1083 -0.5342
2 1 23 3 367 0.0506 -2.0796 2.1498 0.1302
2 1 24 6 479 0.1161 -1.7686 2.2618 -0.4858
2 1 25 3 341 0.0832 -2.1724 2.0650 0.0509
2 1 26 5 1004 0.0774 -0.8664 2.8631 0.1234
2 1 27 4 442 0.1287 -1.8908 2.3216 0.0870
2 1 28 3 627 0.0606 -1.4463 2.6269 0.0184
2 1 29 5 440 0.098 -1.9273 2.2210 0.3361
2 1 30 4 405 0.0741 -1.9860 2.2419 -0.0788
2 1 31 3 420 0.0702 -2.0019 1.9997 0.5566
2 1 32 4 753 0.066 -1.3255 2.6828 -0.1133
2 1 33 4 809 0.053 -1.2001 2.5799 -0.5328
2 1 34 4 916 0.1249 -1.0259 2.8079 -0.1265
2 1 35 6 735 0.113 -1.3437 2.4815 -0.5662
2 1 36 4 601 0.0973 -1.5304 2.3461 0.5964
2 1 37 5 910 0.1047 -1.0531 2.7543 -0.3117
2 1 38 7 553 0.1275 -1.6121 2.4196 0.4125
2 1 39 2 336 0.0558 -2.1776 2.0474 -0.1385
2 1 40 4 493 0.1188 -1.8162 2.0629 0.6600
2 1 41 2 379 0.0386 -2.1126 1.9722 0.4543
2 1 42 2 587 0.0464 -1.5155 2.5877 0.0088
2 1 43 3 555 0.0867 -1.6299 2.2044 0.6676
2 1 44 2 494 0.0366 -1.7981 2.2223 0.5113
2 1 45 3 615 0.0269 -1.5055 2.2826 0.6786
2 1 46 2 334 0.0197 -2.2151 2.0020 0.1682
2 2 1 6 297 0.0708 -1.8502 -0.8413 -0.9991
2 2 2 6 351 0.0651 -1.3349 -1.5444 -0.9981
2 2 3 4 386 0.0701 -1.4977 -1.0432 -0.9843
2 2 4 6 448 0.092 -1.3489 -0.9516 -0.9361
2 2 5 7 346 0.106 -1.6711 -0.8996 -0.9941
2 2 6 7 475 0.1113 -0.9288 -1.5746 -0.9827
2 2 7 5 178 0.0719 -2.1363 -1.2202 -0.8868
2 2 8 5 575 0.0737 -0.8748 -1.1437 -0.8278

Now let’s check the compression summary for HVT (torus_mapC). The table below shows, for each level, the number of cells, the number of cells with quantization error below the threshold, and the percentage of such cells.

displayTable(data = torus_mapC[[3]]$compression_summary)
segmentLevel noOfCells noOfCellsBelowQuantizationError percentOfCellsBelowQuantizationErrorThreshold parameters
1 46 0 0 n_cells: 46 quant.err: 0.1 distance_metric: L2_Norm error_metric: max quant_method: kmeans
2 2116 1748 0.83 n_cells: 46 quant.err: 0.1 distance_metric: L2_Norm error_metric: max quant_method: kmeans

As can be seen from the table above, 0% of the cells meet the quantization error threshold at level 1, while 83% of the cells meet it at level 2.
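The 83% figure can be recomputed directly from the compression summary (a quick sketch, assuming torus_mapC from the earlier trainHVT call is in the session):

```r
# Per-level ratio of cells whose quantization error is below the threshold.
summ <- torus_mapC[[3]]$compression_summary
round(summ$noOfCellsBelowQuantizationError / summ$noOfCells, 2)
```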

Let’s plot the Voronoi tessellation for layer 2 (map C).

plotHVT(torus_mapC,
        line.width = c(0.2,0.1), 
        color.vec = c("navyblue","steelblue"),
        centroid.size = 0.1,
        maxDepth = 2, 
        plot.type = '2Dhvt')

Figure 11: The Voronoi Tessellation for layer 2 (map C) shown for the 928 cells in the dataset ’torus’ at level 2

6.3 Heatmaps

Now let’s plot all the features for each cell at level two as a heatmap for better visualization.

The heatmaps displayed below provide a visual representation of the spatial characteristics of the torus dataset, allowing us to observe patterns and trends in the distribution of each of the features (x, y, z). Green shades highlight regions with higher values, while indigo shades indicate areas with the lowest values. By analyzing these heatmaps, we can gain insights into the variations in, and relationships between, these features within the torus dataset.

  plotHVT(
  torus_mapC,
  child.level = 2,
  hmap.cols = "x",
  line.width = c(0.2,0.1),
  color.vec = c("navyblue","steelblue"),
  centroid.size = 0.1,
  plot.type = '2Dheatmap') 

Figure 12: The Voronoi Tessellation with the heat map overlaid for feature x in the ’torus’ dataset

  plotHVT(
  torus_mapC,
  child.level = 2,
  hmap.cols = "y",
  line.width = c(0.2,0.1),
  color.vec = c("navyblue","steelblue"),
  centroid.size = 0.1,
  plot.type = '2Dheatmap') 

Figure 13: The Voronoi Tessellation with the heat map overlaid for feature y in the ’torus’ dataset

  plotHVT(
  torus_mapC,
  child.level = 2,
  hmap.cols = "z",
  line.width = c(0.2,0.1),
  color.vec = c("navyblue","steelblue"),
  centroid.size = 0.1,
  plot.type = '2Dheatmap') 

Figure 14: The Voronoi Tessellation with the heat map overlaid for feature z in the ’torus’ dataset
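The three heatmap chunks above differ only in the hmap.cols argument. A more compact alternative (a sketch, assuming torus_mapC is available in the session) loops over the features with lapply:

```r
# Generate the x, y and z heatmaps in one pass instead of three
# near-identical plotHVT calls.
heatmaps <- lapply(c("x", "y", "z"), function(feature) {
  plotHVT(
    torus_mapC,
    child.level   = 2,
    hmap.cols     = feature,
    line.width    = c(0.2, 0.1),
    color.vec     = c("navyblue", "steelblue"),
    centroid.size = 0.1,
    plot.type     = '2Dheatmap')
})
```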

We now have the set of maps (map A, map B, and map C) that will be used in scoring to determine which map and cell each test record is assigned to.

7. Scoring

Now that we have built the model, let us score our testing dataset (containing 2400 data points) to determine which cell and which layer each point belongs to.

The scoreLayeredHVT function scores the testing dataset using the set of trained maps. It takes as input a testing dataset and a set of maps (map A, map B, map C).

Now, let us understand the scoreLayeredHVT function.

scoreLayeredHVT(data,
                hvt_mapA,
                hvt_mapB,
                hvt_mapC,
                child.level = 1,
                mad.threshold = 0.2,
                normalize = TRUE,
                distance_metric="L1_Norm",
                error_metric="max",
                yVar)

Each of the parameters of the scoreLayeredHVT function is explained below:

Before that, note that scoreLayeredHVT works by calling the scoreHVT function to score the test data against the results of trainHVT, referred to here as a ‘map’. scoreLayeredHVT scores the test dataset against maps A, B, and C, then processes and merges the results into the final output. The arguments passed through to scoreHVT are therefore important for smooth execution of the function.

When normalize is set to TRUE, scoreHVT standardizes the testing dataset using the mean and standard deviation of the training dataset, taken from the trainHVT results.
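Conceptually, this standardization can be sketched in base R; the key point is that the test data is centered and scaled with the *training* statistics, not its own. The data frames below are illustrative, not part of the vignette:

```r
# Illustrative only: standardize test data with the training mean
# and standard deviation, as scoreHVT does internally when
# normalize = TRUE.
set.seed(42)
train <- data.frame(x = rnorm(100), y = rnorm(100))
test  <- data.frame(x = rnorm(10),  y = rnorm(10))

train_mean <- sapply(train, mean)
train_sd   <- sapply(train, sd)

test_scaled <- as.data.frame(
  scale(test, center = train_mean, scale = train_sd))
```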

The function scores against the HVT maps (map A, map B, and map C) constructed using the trainHVT function. Each test record is assigned to Layer1 or Layer2: Layer1 contains the cell IDs from map A, and Layer2 contains the cell IDs from map B (the novelty map) and map C (the map without novelties).

Scoring Algorithm

The Scoring algorithm recursively calculates the distance between each point in the testing dataset and the cell centroids for each level. The following steps explain the scoring method for a single point in the test dataset:

  1. Calculate the distance between the point and the centroid of all the cells in the first level.
  2. Find the cell whose centroid has minimum distance to the point.
  3. Check if the cell drills down further to form more cells.
  4. If it doesn’t, return the path. Otherwise, repeat steps 1 to 3 at the next level until we reach a cell that doesn’t drill down further.

Note: The scoring algorithm will not work if any of the variables used to perform the quantization are missing; no features should be removed from the testing dataset.
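The recursive descent described above can be sketched in a few lines of R. This is an illustrative toy version, not the package internals: each cell is a hypothetical list holding a centroid and an optional list of child cells, and distance is the L1 norm (matching the distance_metric default shown earlier):

```r
# Toy sketch of the recursive scoring step: at each level, pick the
# cell whose centroid is nearest (L1 distance) to the point, then
# descend into its children until a cell has no further drill-down.
score_point <- function(point, cells) {
  d    <- sapply(cells, function(cl) sum(abs(cl$centroid - point)))
  best <- which.min(d)
  if (is.null(cells[[best]]$children)) return(best)
  c(best, score_point(point, cells[[best]]$children))
}

# Two level-1 cells; the first drills down into two level-2 cells.
cells <- list(
  list(centroid = c(0, 0),
       children = list(list(centroid = c(-0.5, 0), children = NULL),
                       list(centroid = c( 0.5, 0), children = NULL))),
  list(centroid = c(3, 3), children = NULL))

score_point(c(0.4, 0.1), cells)  # path: cell 1 at level 1, child 2 at level 2
```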

validation_data <- torus_test
new_score <- scoreLayeredHVT(
    data=validation_data,
    hvt_mapA = torus_mapA,
    hvt_mapB = torus_mapB,
    hvt_mapC = torus_mapC,
    normalize = FALSE )

Let’s see which cell and layer each point belongs to and check the Mean Absolute Difference for each of the 2400 records. For the sake of brevity, we are only displaying the first 100 rows.

displayTable(new_score[["actual_predictedTable"]])
Row.Number act_x act_y act_z Layer1.Cell.ID Layer2.Cell.ID pred_x pred_y pred_z diff
1 -2.6282 0.5656 -0.7253 A85 C153 -2.5259 -0.5698 -0.7073 0.4186
2 2.7471 -0.9987 -0.3848 A425 C1865 2.5078 -1.4110 -0.1993 0.2790
3 -2.4446 -1.6528 0.3097 A3 C64 -2.4620 -1.3984 0.3219 0.0947
4 -2.6487 -0.5745 0.7040 A41 C287 -2.0845 -0.4683 0.9331 0.2998
5 -0.2676 -1.0800 -0.4611 A157 C815 -0.1826 -1.4025 -0.7633 0.2366
6 -1.1130 -0.6516 -0.7040 A126 C695 -0.8307 -0.6299 -0.2498 0.2527
7 2.0288 1.9519 0.5790 A491 C2042 1.7211 2.2262 0.3548 0.2688
8 -2.4799 1.6863 -0.0470 A140 C331 -2.2996 1.3756 -0.5613 0.3351
9 -0.4105 -1.1610 -0.6398 A119 C815 -0.1826 -1.4025 -0.7633 0.1976
10 -0.2545 -1.6160 -0.9314 A83 C815 -0.1826 -1.4025 -0.7633 0.1512
11 1.1500 0.3945 -0.6205 A352 C1537 1.2573 0.0921 -0.6336 0.1409
12 -1.2557 -1.1369 0.9520 A67 C436 -1.1822 -1.5124 0.9261 0.1583
13 -0.5449 -2.6892 -0.6684 A43 C352 -0.8253 -2.4341 -0.7089 0.1920
14 2.9093 0.7222 -0.0697 A478 C2118 2.6593 1.1769 0.0397 0.2714
15 2.3205 1.2520 -0.7711 A476 C1908 1.8602 1.2506 -0.8799 0.1902
16 1.4772 -0.5194 -0.9008 A298 C1646 1.8050 -0.8284 -0.9448 0.2270
17 -1.3176 -2.6541 0.2690 A11 C88 -1.7876 -2.2656 0.0079 0.3732
18 1.0687 0.1211 -0.3812 A316 C1537 1.2573 0.0921 -0.6336 0.1566
19 -0.9632 0.3283 -0.1866 A195 C807 -0.9248 0.4324 -0.1556 0.0578
20 2.5616 0.4634 0.7976 A465 C1891 2.1489 0.5767 0.9229 0.2171
21 2.8473 -0.9303 -0.0955 A424 B7 2.8101 -1.0120 0.0205 0.0783
22 -0.5293 -0.8566 0.1173 A154 C880 -0.1847 -1.0807 0.3879 0.2798
23 -1.9898 -2.1766 0.3150 A2 C88 -1.7876 -2.2656 0.0079 0.1994
24 -0.8845 -1.2219 -0.8709 A105 C355 -1.5176 -1.2760 -0.9229 0.2464
25 0.1553 2.2566 0.9651 A405 C1607 0.2441 2.6498 0.6154 0.2772
26 2.4262 -0.6069 -0.8655 A383 C1646 1.8050 -0.8284 -0.9448 0.3073
27 -0.0667 -1.4627 -0.8444 A120 C815 -0.1826 -1.4025 -0.7633 0.0857
28 -0.0655 -1.3311 -0.7448 A151 C815 -0.1826 -1.4025 -0.7633 0.0690
29 1.9592 1.5104 0.8806 A458 C1753 1.4176 1.2762 0.9341 0.2764
30 1.2332 2.5452 0.5603 A479 C2042 1.7211 2.2262 0.3548 0.3375
31 -0.8720 0.4903 0.0287 A214 C807 -0.9248 0.4324 -0.1556 0.0983
32 0.2194 -1.7686 0.9760 A139 C1101 0.6991 -1.6243 0.9238 0.2254
33 1.5052 0.0445 -0.8694 A351 C1537 1.2573 0.0921 -0.6336 0.1771
34 -2.8410 -0.8651 0.2439 A17 C125 -2.8632 0.0406 0.1978 0.3247
35 1.3203 -2.5967 0.4077 A104 C1387 1.4533 -2.4119 0.3963 0.1097
36 -1.5648 1.5577 0.9781 A228 C412 -2.1320 1.1371 0.8097 0.3854
37 0.3589 -1.0419 -0.4400 A205 C1184 0.7357 -0.9624 -0.5816 0.1993
38 -0.2900 -2.0106 0.9995 A76 C602 -0.1334 -2.4336 0.7936 0.2618
39 0.5300 1.3668 0.8455 A374 C1320 0.2437 1.4647 0.8136 0.1387
40 1.0254 -0.6738 0.6344 A279 C1251 0.8259 -0.6708 0.3379 0.1663
41 -0.9306 0.3664 0.0154 A214 C807 -0.9248 0.4324 -0.1556 0.0810
42 2.3888 -1.0670 0.7875 A384 C1657 1.9231 -1.1735 0.8763 0.2203
43 -0.9830 -0.2043 -0.0897 A163 C695 -0.8307 -0.6299 -0.2498 0.2460
44 0.9499 0.3135 0.0261 A326 C1404 0.7958 0.6328 0.1192 0.1888
45 -1.8079 -1.4936 0.9386 A44 C436 -1.1822 -1.5124 0.9261 0.2190
46 1.8399 -1.9295 -0.7459 A245 C1362 1.3774 -1.9334 -0.8316 0.1840
47 -0.3304 -1.8481 0.9925 A76 C602 -0.1334 -2.4336 0.7936 0.3271
48 -2.2806 -1.8984 0.2536 A3 C64 -2.4620 -1.3984 0.3219 0.2499
49 -2.3323 1.7320 0.4252 A207 C412 -2.1320 1.1371 0.8097 0.3932
50 0.5520 0.8441 0.1308 A331 C1404 0.7958 0.6328 0.1192 0.1556
51 -0.9449 2.2273 0.9078 A289 C968 -0.8344 2.0840 0.8889 0.0909
52 0.2334 -1.4612 -0.8540 A132 C815 -0.1826 -1.4025 -0.7633 0.1885
53 2.7387 0.9703 0.4244 A481 C2118 2.6593 1.1769 0.0397 0.2235
54 0.3561 1.1619 -0.6199 A340 C1465 0.5899 1.2696 -0.7664 0.1627
55 1.7006 1.5569 -0.9522 A452 C1908 1.8602 1.2506 -0.8799 0.1794
56 1.7244 -0.5698 0.9829 A357 C1657 1.9231 -1.1735 0.8763 0.3030
57 0.9922 1.1438 -0.8741 A373 C1465 0.5899 1.2696 -0.7664 0.2120
58 -0.3022 -1.3611 0.7956 A130 C880 -0.1847 -1.0807 0.3879 0.2685
59 -0.9693 1.0602 0.8261 A236 C814 -1.0600 0.9346 0.7788 0.0879
60 1.1313 -0.3595 -0.5824 A294 C1537 1.2573 0.0921 -0.6336 0.2096
61 -0.7561 -2.5384 -0.7611 A31 C352 -0.8253 -2.4341 -0.7089 0.0752
62 2.3168 1.8924 0.1302 A499 C2118 2.6593 1.1769 0.0397 0.3828
63 1.2363 -2.6444 -0.3939 A108 C874 0.4550 -2.6700 -0.5138 0.3089
64 -1.3204 -0.6281 0.8430 A111 C609 -1.1913 -0.1942 0.6043 0.2673
65 1.3733 1.1877 0.9829 A409 C1753 1.4176 1.2762 0.9341 0.0605
66 1.0874 -0.1278 0.4251 A333 C1576 1.3877 -0.0061 0.7561 0.2510
67 2.1300 -1.2171 -0.8914 A301 C1646 1.8050 -0.8284 -0.9448 0.2557
68 1.6863 -0.5945 0.9773 A357 C1657 1.9231 -1.1735 0.8763 0.3056
69 0.8504 1.0927 -0.7882 A373 C1465 0.5899 1.2696 -0.7664 0.1531
70 0.3029 1.0731 0.4656 A336 C1320 0.2437 1.4647 0.8136 0.2663
71 -1.4724 1.1331 0.9899 A210 C814 -1.0600 0.9346 0.7788 0.2740
72 -0.5452 -1.2243 0.7514 A136 C880 -0.1847 -1.0807 0.3879 0.2892
73 -1.6866 2.1137 0.7101 A226 C968 -0.8344 2.0840 0.8889 0.3536
74 1.2012 -2.0386 -0.9305 A158 C1362 1.3774 -1.9334 -0.8316 0.1268
75 -0.2108 2.3579 0.9301 A405 C968 -0.8344 2.0840 0.8889 0.3129
76 -0.5982 1.3776 -0.8671 A265 C886 -0.9237 1.3992 -0.8903 0.1234
77 -0.2116 -1.0573 -0.3878 A157 C815 -0.1826 -1.4025 -0.7633 0.2499
78 -0.7802 -0.9000 -0.5880 A118 C695 -0.8307 -0.6299 -0.2498 0.2196
79 1.0850 -1.6815 1.0000 A196 C1101 0.6991 -1.6243 0.9238 0.1731
80 1.5563 0.1715 -0.9008 A351 C1537 1.2573 0.0921 -0.6336 0.2152
81 -0.3790 1.4273 0.8522 A318 C1320 0.2437 1.4647 0.8136 0.2329
82 -1.2769 -0.2633 0.7178 A122 C609 -1.1913 -0.1942 0.6043 0.0894
83 -1.6039 2.4566 0.3575 A257 C567 -1.5739 2.3989 -0.0019 0.1490
84 -0.9297 2.4281 -0.8000 A309 C1208 -0.4306 2.5131 -0.7020 0.2274
85 0.5324 -0.8526 0.1016 A220 C1251 0.8259 -0.6708 0.3379 0.2372
86 0.3928 1.5433 -0.9132 A362 C1465 0.5899 1.2696 -0.7664 0.2058
87 1.0031 0.3850 -0.3786 A327 C1537 1.2573 0.0921 -0.6336 0.2673
88 -0.7562 0.7889 -0.4207 A232 C807 -0.9248 0.4324 -0.1556 0.2634
89 -1.0870 -0.7523 -0.7350 A102 C695 -0.8307 -0.6299 -0.2498 0.2880
90 -1.8671 -0.8423 -0.9988 A59 C355 -1.5176 -1.2760 -0.9229 0.2864
91 0.8325 -0.9413 0.6689 A242 C1251 0.8259 -0.6708 0.3379 0.2027
92 -0.3355 0.9636 0.2005 A277 C1120 -0.1585 1.0003 -0.0305 0.1483
93 -1.0089 -0.6007 0.5639 A133 C609 -1.1913 -0.1942 0.6043 0.2098
94 1.7725 1.7153 -0.8845 A452 C1908 1.8602 1.2506 -0.8799 0.1857
95 0.5539 -0.8888 0.3037 A220 C1251 0.8259 -0.6708 0.3379 0.1747
96 0.8149 -2.6016 0.6874 A84 C1387 1.4533 -2.4119 0.3963 0.3731
97 0.1104 1.7654 -0.9729 A379 C1465 0.5899 1.2696 -0.7664 0.3939
98 1.0107 0.3118 0.3349 A326 C1404 0.7958 0.6328 0.1192 0.2505
99 2.2697 -0.3642 0.9543 A403 C1891 2.1489 0.5767 0.9229 0.3643
100 0.4983 -0.8672 -0.0185 A220 C1251 0.8259 -0.6708 0.3379 0.2935
hist(new_score[["actual_predictedTable"]]$diff, breaks = 30, col = "blue", main = "Mean Absolute Difference", xlab = "Difference")

Figure 16: Mean Absolute Difference
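A quick sanity check on the scored output (a sketch using the new_score object from the chunk above) is to summarize the per-record mean absolute difference and count how many records exceed the mad.threshold value of 0.2 used in the signature shown earlier:

```r
# Summarize the per-record mean absolute difference and flag records
# above the mad.threshold of 0.2.
mad_values <- new_score[["actual_predictedTable"]]$diff
mean(mad_values)          # overall mean absolute difference
sum(mad_values > 0.2)     # records exceeding the threshold
```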

8. Executive Summary

9. References

  1. Topology Preserving Maps

  2. Vector Quantization

  3. K-means

  4. Sammon’s Projection

  5. Voronoi Tessellations